Guideline: Quality Characteristics
Relationships
Main Description

For purposes of testing, TMap employs the set of quality characteristics shown below. Another common set of quality characteristics can be found in the international standard ISO9126. The use of a set of quality characteristics, whether from TMap or from ISO9126, is recommended as a way to check for completeness. It allows you to check that a deliberate decision has been made, for every aspect or characteristic of the system or package under test, about whether or not to test it. It makes little difference which set is applied. Often, the organisation has already made a choice. An illustration of the TMap quality characteristics comparable to ISO9126 can be found at www.tmap.net.

There are a number of reasons for keeping to the TMap set of quality characteristics and not changing to ISO9126:

  • In many organisations, TMap is the standard for testing, including the TMap set of quality characteristics. These organisations see little need to change over to another set of quality characteristics.
  • The testing of functionality is one of the most important areas of focus in testing, and is discussed a lot in this book. ISO9126 sees functionality as an umbrella concept, which encompasses, for example, security and suitability. Therefore, within ISO9126, the testing of security and suitability falls under the testing of functionality. This is confusing in a book on testing.
  • ISO9126 is not necessarily better or worse than the TMap set; it is simply different.
  • While ISO9126 is an international standard, in practice many organisations create their own variant of it, which detracts from the authority of ISO9126 as a standard. Various organisations also follow outdated versions of ISO9126.

The quality characteristics distinguished by TMap:

  • Connectivity
  • Continuity
  • Data controllability
  • Effectivity
  • Efficiency
  • Flexibility
  • Functionality
  • (Suitability of) infrastructure
  • Maintainability
  • Manageability
  • Performance
  • Portability
  • Reusability
  • Security
  • Suitability
  • Testability
  • User-friendliness

A description of each quality characteristic is given below, with an indication of the ways in which the testing of these takes place in practice, referring where necessary to Test Types. There is also a table included in Allocate Test Units And Test Techniques (AST) that shows test design techniques that are usable for a number of quality characteristics.

Connectivity

The ease with which an interface can be created with another information system or within the information system, and can be changed. Connectivity is tested statically by assessing the relevant measures (such as standardisation) with the aid of a checklist. The testing of connectivity therefore concerns the evaluation of the ease with which a (new) interface can be set up or changed, and not the testing of whether an interface operates correctly. The latter is normally part of the testing of functionality.

Continuity

The certainty that the information system will continue without disruption, i.e. that it can be resumed within a reasonable time, even after a serious breakdown.

The continuity quality characteristic can be split into characteristics that can be applied in sequence, in the event of increasing disruption of the information system:

  • Reliability - the degree to which the information system remains free of breakdowns
  • Robustness - the degree to which the information system can simply proceed after the breakdown has been rectified
  • Recoverability - the ease and speed with which the information system can be resumed following a breakdown
  • Degradation factor - the ease with which the core of the information system can proceed after a part has shut down
  • Fail-over possibilities - the ease with which (a part of) the information system can be continued at another location.

Continuity can be tested statically by assessing the existence and setup of measures in the context of continuity on the basis of a checklist. Dynamic implicit testing is possible through the collecting of statistics during the execution of other tests. The simulation of long-term system usage (reliability) or the simulation of breakdown (robustness, recoverability, degradation and fail-over) are dynamic explicit tests.
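As an illustration, a recoverability simulation can be sketched in a few lines. The checkpoint mechanism and names below are hypothetical; they merely show the idea of provoking a breakdown and verifying that processing resumes from where it stopped rather than restarting.

```python
def process_with_checkpoint(items, checkpoint, fail_at=None):
    """Process items, persisting a checkpoint after each one.

    fail_at simulates a breakdown at the given item index.
    """
    done = checkpoint["done"]
    for i, item in enumerate(items):
        if i < done:
            continue  # already processed before the breakdown
        if fail_at is not None and i == fail_at:
            raise RuntimeError("simulated breakdown")
        checkpoint["done"] = i + 1
    return checkpoint["done"]

items = ["a", "b", "c", "d"]
checkpoint = {"done": 0}

# Provoke a breakdown after two items have been processed.
try:
    process_with_checkpoint(items, checkpoint, fail_at=2)
except RuntimeError:
    pass

assert checkpoint["done"] == 2

# Recovery: resume from the checkpoint; only the remaining items run.
assert process_with_checkpoint(items, checkpoint) == 4
print("recovered from checkpoint; items processed:", checkpoint["done"])
```

A real continuity test would of course break and restore actual system components; the checkpoint-and-resume pattern is what the test verifies.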

Data controllability

The ease with which the accuracy and completeness of the information can be verified (over time).

Common means employed in this connection are checksums, crosschecks and audit trails. Data controllability can be statically tested, focusing on the setup of the relevant measures with the aid of a checklist, and can be dynamically explicitly tested, focusing on the implementation of those measures in the system.
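A minimal sketch of a control-total crosscheck, one common form of such a measure (the record layout and field names are illustrative):

```python
def control_totals(records):
    """Return (record count, amount sum) as a batch control total."""
    return len(records), sum(r["amount"] for r in records)

input_batch = [
    {"id": 1, "amount": 100.0},
    {"id": 2, "amount": 250.5},
    {"id": 3, "amount": 49.5},
]

# Totals captured before processing...
expected = control_totals(input_batch)

# ...the system processes the batch (here a simple pass-through)...
stored = list(input_batch)

# ...and the stored result must reproduce the same totals.
actual = control_totals(stored)
assert actual == expected, f"control totals differ: {actual} != {expected}"
print("batch verified:", actual)  # → batch verified: (3, 400.0)
```

An audit trail extends the same idea over time: every mutation is logged so that any stored value can be traced back to its source.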

Effectivity

The degree to which the information system is tailored to the organisation and the profile of the end users for whom it is intended, as well as the degree to which the information system contributes to the achievement of the company goals.

A usable information system increases the efficiency of the business processes. Will a new system function in practice, or not? Only the users’ organisation can answer that question. During (user) acceptance tests, this aspect is usually (implicitly) included. If the aspect of usability is explicitly recognised in the test strategy, a test type can be organised for it: the business simulation.

During a business simulation, a random group of potential users tests the usability aspects of the product in an environment that approximates as far as possible the “real-life” environment in which they plan to use the system: the simulated production environment. The test takes place based on a number of practical exercises or test scripts. In practice, the testing of usability is often combined with the testing of user-friendliness within the test type of usability; see Usability Test for further information.

Efficiency

The relationship between the performance level of the system (expressed in the transaction volume and the total speed) and the volume of resources (CPU cycles, I/O time, memory and network usage, etc.) used for these.

Efficiency is dynamically explicitly tested with the aid of tools that measure the resource usage and/or dynamically implicitly by the accumulation of statistics (by those same tools) during the execution of functionality tests. This aspect is often particularly evident with embedded systems.
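By way of illustration, resource usage can be sampled around a workload with standard tooling; the sketch below uses Python's time and tracemalloc modules with a stand-in workload (the workload itself is hypothetical):

```python
import time
import tracemalloc

def process(n):
    """Stand-in workload: sum the squares below n."""
    return sum(i * i for i in range(n))

tracemalloc.start()
t0 = time.perf_counter()
result = process(100_000)
elapsed = time.perf_counter() - t0
current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()

# Performance level (speed) set against resource usage (memory).
print(f"result={result}, elapsed={elapsed:.4f}s, peak_memory={peak} bytes")
```

In practice, dedicated monitoring tools gather such figures across CPU, I/O and network as well, often implicitly while functionality tests run.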

Flexibility

The degree to which the user is able to introduce enhancements to, or variations on, the information system without amending the software.

In other words, the degree to which the system can be amended by the user organisation, without being dependent on the IT department for maintenance. Flexibility is statically tested by assessing the relevant measures with the aid of a checklist. Dynamic explicit testing can take place during the (user) acceptance test, by having the user, for example, create a new mortgage variant (in the case of mortgages) or change the way the commission is calculated (in the case of credit cards), in both cases by changing parameters. It is often tested in this way first, before the change is actually implemented in production.
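Such a parameter-driven change can be sketched as follows (the commission parameters and amounts are hypothetical):

```python
# Hypothetical parameter table; in a flexible system these values live in
# configuration that the user organisation can change, not in the software.
parameters = {"commission_rate": 0.02, "minimum_commission": 5.00}

def commission(amount, params):
    """Calculate commission from the current parameter settings."""
    return max(amount * params["commission_rate"], params["minimum_commission"])

print(commission(1000.0, parameters))  # 0.02 * 1000 = 20.0

# The flexibility test: the user changes a parameter (not the code)
# and verifies the new behaviour before it goes to production.
parameters["commission_rate"] = 0.03
print(commission(1000.0, parameters))  # 30.0
```

The point being tested is not the arithmetic but the fact that the behaviour changed without any amendment to the software.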

Functionality

The degree of certainty that the system processes the information accurately and completely.

The quality characteristic of functionality can be split into the characteristics of accuracy and completeness:

  • Accuracy - the degree to which the system, according to the specifications, correctly processes the supplied input and mutations into consistent data collections and output
  • Completeness - the certainty that all of the input and mutations are being processed by the system.

In testing, compliance with the specified functionality is often the most important criterion for acceptance of the information system. Using various test design techniques, the functional operation can be dynamically explicitly tested.
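A minimal sketch of such a dynamic test, checking both characteristics against a toy mutation-processing routine (all names and values are illustrative):

```python
def apply_mutations(balance, mutations):
    """Toy system under test: applies balance mutations and logs each one."""
    log = []
    for m in mutations:
        balance += m["delta"]
        log.append(m["id"])
    return balance, log

mutations = [
    {"id": "m1", "delta": +150},
    {"id": "m2", "delta": -40},
    {"id": "m3", "delta": +10},
]

balance, processed = apply_mutations(100, mutations)

# Completeness: every supplied mutation appears in the processing log.
assert processed == ["m1", "m2", "m3"]

# Accuracy: the resulting balance matches the specified outcome.
assert balance == 100 + 150 - 40 + 10  # 220
print("accuracy and completeness checks passed")
```

Test design techniques such as the data combination test or decision table test generate the input and mutation sets systematically; the check itself always has this accuracy-plus-completeness shape.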

(Suitability of) Infrastructure

The appropriateness of the hardware, the network, the system software, the DBMS and the (technical) architecture in a general sense to the relevant application and the degree to which these infrastructure elements interconnect.

The testing of this aspect can be done in various ways. The tester’s expertise as related to the infrastructural elements concerned is very important here.

Maintainability

The ease with which the information system can be adapted to new requirements of the user, to the changing external environment, or in order to correct faults.

Insight into the maintainability is obtained, for example, by registering the average effort (in the number of hours) required to solve a fault or by registering the average duration of repair (Mean Time to Repair (MTTR)). Maintainability is also tested by assessing the internal quality of the information system (including associated system documentation) with the aid of a checklist.

Insight into the structuredness of the software (an aspect of maintainability) is obtained by carrying out static tests, preferably supported by code analysis tools.
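By way of example, MTTR can be derived from a simple repair log (the figures are hypothetical):

```python
from statistics import mean

# Hypothetical repair log: hours spent resolving each registered fault.
repair_hours = [4.0, 2.5, 8.0, 1.5]

# Mean Time to Repair: the average duration of a repair.
mttr = mean(repair_hours)
print(f"MTTR: {mttr:.2f} hours")  # (4.0 + 2.5 + 8.0 + 1.5) / 4 = 4.00
```

Tracking this figure over releases gives a trend: a rising MTTR is an early signal that maintainability is degrading.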

Manageability

The ease with which the information system can be placed and maintained in an operational condition.

Manageability is primarily aimed at technical system administration. The ease of installation of the information system is part of this characteristic. It can be tested statically by assessing the existence of measures and instruments that simplify or facilitate system management. Dynamic testing of system management takes place by, for example, carrying out an installation test and by carrying out the administration procedures (such as backup and recovery) in the test environment.

Performance

The speed with which the information system handles interactive and batch transactions.

Test types for performance are discussed in Performance Test.

Portability

The diversity of the hardware and software platform on which the information system can run, and the ease with which the system can be transferred from one environment to another.

Test types for portability are discussed in Portability Test.

Reusability

The degree to which parts of the information system, or of the design, can be used again for the development of other applications.

If the system is to a large extent based on reusable modules, this also benefits the maintainability. Reusability is tested through assessing the information system and/or the design with the aid of a checklist.

Security

The certainty that consultation or mutation of the data can only be performed by those persons who are authorised to do so.

Test types for information security are discussed in Information Security Test.

Suitability

The degree to which the manual procedures and the automated information system interconnect, and the workability of these manual procedures for the organisation.

In the testing of suitability, the aspect of timeliness is also often included. Timeliness is defined as the degree to which the information becomes available in time to take the measures for which that information was intended. Suitability is dynamically explicitly tested with the aid of the process cycle test.

Testability

The ease and speed with which the functionality and performance level of the system (after each adjustment) can be tested.

Testability in this case concerns the total information system. The quality of the system documentation greatly influences the testability of the system; this is statically measured with the aid of the “testability review” checklist during the Preparation phase. A checklist can also be used to measure the testability of the information system itself. Things that (strongly) benefit testability are:

  • Good system documentation
  • Having an (automated) regression test and other testware
  • The ease with which interim results of the system can be made visible, assessed and even manipulated
  • Various test-environment aspects, such as representativeness and an adjustable system date for purposes of time travel.
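The last point, an adjustable system date, can be sketched by injecting a clock into the system under test (a common test-double approach; all names here are illustrative):

```python
import datetime

class FixedClock:
    """Test double for the system clock, enabling 'time travel' in tests."""
    def __init__(self, today):
        self._today = today
    def today(self):
        return self._today

def is_payment_overdue(due_date, clock):
    """System logic that depends on the current date via an injected clock."""
    return clock.today() > due_date

due = datetime.date(2024, 1, 31)

# Travel to a date after the deadline without touching the real system date.
assert is_payment_overdue(due, FixedClock(datetime.date(2024, 2, 15)))
assert not is_payment_overdue(due, FixedClock(datetime.date(2024, 1, 10)))
print("time-travel checks passed")
```

Because the date is supplied rather than read from the environment, date-dependent behaviour (deadlines, year-end runs) can be tested at will, which is precisely what makes the system more testable.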

User-friendliness

The ease of operation of the system by the end users.

Often, this general definition is split into the ease with which the end user can learn to handle the information system, and the ease with which trained users can handle the information system. It is difficult to establish an objective and workable unit of measurement for user-friendliness. However, it is often possible to give a (subjective) opinion, couched in general terms, concerning this aspect. User-friendliness is tested within the test type of Usability; see Usability Test for further information.